Nonparametric learning from Bayesian models with randomized objective functions

Lyddon, Simon, Walker, Stephen, Holmes, Chris C.

Neural Information Processing Systems

Bayesian learning is built on an assumption that the model space contains a true reflection of the data generating mechanism. This assumption is problematic, particularly in complex data environments. Here we present a Bayesian nonparametric approach to learning that makes use of statistical models, but does not assume that the model is true. Our approach has provably better properties than using a parametric model and admits a Monte Carlo sampling scheme that can afford massive scalability on modern computer architectures. The model-based aspect of learning is particularly attractive for regularizing nonparametric inference when the sample size is small, and also for correcting approximate approaches such as variational Bayes (VB). We demonstrate the approach on a number of examples including VB classifiers and Bayesian random forests.



Reviews: Nonparametric learning from Bayesian models with randomized objective functions

Neural Information Processing Systems

The idea: You want to do Bayesian inference on a parameter theta, with prior pi(theta) and parametric likelihood f_theta, but you're not sure whether the likelihood is correctly specified. So put a nonparametric prior on the sampling distribution: a mixture of Dirichlet processes centered at f_theta with mixing distribution pi(theta). The concentration parameter of the DP provides a sliding scale between vanilla Bayesian inference (total confidence in the parametric model) and the Bayesian bootstrap (no confidence at all, use the empirical distribution).

This is a simple idea, but the paper presents it lucidly and compellingly, beginning with a diverse list of potential applications: the method may be viewed as regularization of a nonparametric Bayesian model towards a parametric one; as robustification of a parametric Bayesian model against misspecification; as a means of correcting a variational approximation; or as nonparametric decision theory, when the log-likelihood is swapped out for an arbitrary utility function.

As for implementation, the procedure requires (1) sampling from the parametric Bayesian posterior distribution and (2) performing a p-dimensional maximization, where p is the dimension of theta.
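To make those two steps concrete, here is a minimal sketch of the kind of posterior-bootstrap sampler the review describes, for a toy N(theta, 1) location model. The model choice, the N(0, 10^2) prior, and all names (posterior_bootstrap, c, T) are illustrative assumptions, not code from the paper; for this model the p-dimensional maximization collapses to a closed-form weighted mean, so no numerical optimizer is needed.

```python
# Illustrative sketch only (toy model assumed): one NPL posterior-bootstrap
# sample = (1) draw theta from the parametric posterior, (2) draw Dirichlet
# weights over the data plus T pseudo-samples from f_theta, (3) maximize the
# weighted log-likelihood. For f_theta = N(theta, 1), step (3) is just the
# weighted mean of the combined sample.
import numpy as np

rng = np.random.default_rng(0)

def posterior_bootstrap(x, c=5.0, T=100, B=1000):
    """B draws from the NPL posterior for N(theta, 1) with an N(0, 10^2) prior.

    c -> 0 recovers the Bayesian bootstrap (no confidence in the model);
    large c concentrates the draws on the parametric Bayesian posterior.
    """
    n = len(x)
    post_var = 1.0 / (n + 1.0 / 100.0)       # conjugate normal posterior
    post_mean = post_var * x.sum()
    out = np.empty(B)
    for b in range(B):
        theta = rng.normal(post_mean, np.sqrt(post_var))  # (1) posterior draw
        pseudo = rng.normal(theta, 1.0, size=T)           # draws from f_theta
        z = np.concatenate([x, pseudo])
        # DP posterior weights: mass 1 per datum, c/T per pseudo-datum.
        w = rng.dirichlet(np.concatenate([np.ones(n), np.full(T, c / T)]))
        out[b] = np.sum(w * z)               # (2) argmax of weighted log-lik
    return out

x = rng.normal(2.0, 1.0, size=20)
print(posterior_bootstrap(x).mean())
```

Setting c = 0 drops the pseudo-samples entirely and reduces to the Bayesian bootstrap over the observed data, while letting c grow pulls the draws toward the parametric posterior; this is the sliding scale mentioned above.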


Nonparametric learning from Bayesian models with randomized objective functions

Lyddon, Simon, Walker, Stephen, Holmes, Chris C.

arXiv.org Machine Learning

Bayesian learning is built on an assumption that the model space contains a true reflection of the data generating mechanism. This assumption is problematic, particularly in complex data environments. Here we present a Bayesian nonparametric approach to learning that makes use of statistical models, but does not assume that the model is true. Our approach has provably better properties than using a parametric model and admits a trivially parallelizable Monte Carlo sampling scheme that affords massive scalability on modern computer architectures. The model-based aspect of learning is particularly attractive for regularizing nonparametric inference when the sample size is small, and also for correcting approximate approaches such as variational Bayes (VB). We demonstrate the approach on a number of examples including VB classifiers and Bayesian random forests.
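The "trivially parallelizable" claim holds because posterior-bootstrap draws are mutually independent: the outer Monte Carlo loop maps straight onto worker processes. A self-contained sketch under the same toy N(theta, 1) assumptions as above (all names here are illustrative, not the authors' code):

```python
# Hypothetical illustration of the parallelism claim: each draw depends only
# on its own random seed, so the B iterations can be farmed out to processes.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

X = np.random.default_rng(0).normal(2.0, 1.0, size=20)  # toy data

def one_draw(seed, c=5.0, T=100):
    """One independent posterior-bootstrap draw for N(theta, 1), N(0, 10^2) prior."""
    rng = np.random.default_rng(seed)
    n = len(X)
    post_var = 1.0 / (n + 0.01)              # conjugate posterior variance
    theta = rng.normal(post_var * X.sum(), np.sqrt(post_var))
    z = np.concatenate([X, rng.normal(theta, 1.0, size=T)])
    w = rng.dirichlet(np.concatenate([np.ones(n), np.full(T, c / T)]))
    return np.sum(w * z)                     # closed-form weighted MLE

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:      # embarrassingly parallel map
        samples = list(pool.map(one_draw, range(1000)))
    print(np.mean(samples))
```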